Results 1 - 5 of 5
1.
Comput Biol Med ; 156: 106668, 2023 04.
Article in English | MEDLINE | ID: covidwho-2273859

ABSTRACT

Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these results, the widespread adoption of such techniques in clinical practice is still proceeding at a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. Answering them is of utmost importance in the regulated healthcare domain, to increase practitioners', patients' and other stakeholders' trust in the automated diagnosis system. The application of deep learning to medical imaging must be treated with caution owing to health and safety concerns, similar to blame attribution in the case of an accident involving an autonomous car. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures, millions of parameters, and a 'black box' nature, offering little insight into their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help to understand model predictions, which in turn helps develop trust in the system, accelerates disease diagnosis, and meets regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and propose future directions for XAI that will be of interest to clinicians, regulators and model developers.


Subject(s)
Artificial Intelligence , Neural Networks, Computer , Humans , Diagnostic Imaging , Algorithms , Machine Learning
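A common family of XAI techniques for imaging models is gradient-based attribution (saliency maps), in which the gradient of the prediction with respect to each input pixel indicates how much that pixel contributed. A minimal sketch follows, using a toy logistic "model" as a stand-in for a trained DNN; all names, weights, and data are illustrative assumptions, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)          # toy model weights (stand-in for a trained DNN)
x = rng.normal(size=64)          # one flattened 8x8 "image"

def predict(x):
    # Sigmoid output in (0, 1), e.g. probability of disease
    return 1.0 / (1.0 + np.exp(-w @ x))

def saliency(x):
    # For a sigmoid model, d(output)/dx = p * (1 - p) * w;
    # the absolute gradient magnitude is the per-pixel importance
    p = predict(x)
    return np.abs(p * (1.0 - p) * w)

s = saliency(x).reshape(8, 8)    # per-pixel importance map
top_pixel = np.unravel_index(np.argmax(s), s.shape)  # most influential pixel
```

For a real DNN the gradient would come from automatic differentiation rather than a closed form, but the interpretation is the same: a heatmap that a clinician can compare against the suspect image region.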
2.
24th International Conference on Human-Computer Interaction, HCII 2022 ; 1654 CCIS:119-127, 2022.
Article in English | Scopus | ID: covidwho-2173706

ABSTRACT

Using Machine Learning and Deep Learning to predict cognitive tasks from electroencephalography (EEG) signals has been a fast-developing area in Brain-Computer Interfaces (BCI). However, during the COVID-19 pandemic, data collection and analysis became more challenging than before. This paper explored machine learning algorithms that can run efficiently on personal computers for BCI classification tasks. We also investigated a way to conduct such BCI experiments remotely via Zoom. The results showed that Random Forest and RBF SVM performed well for EEG classification tasks. The remote experiment during the pandemic posed several challenges, and we discuss possible solutions; nevertheless, we developed a protocol that gives interested non-experts a guideline for such data collection. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
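The two classifiers the paper reports on, Random Forest and RBF-kernel SVM, are both available off the shelf in scikit-learn and run comfortably on a personal computer. A minimal sketch on synthetic stand-in data (the paper's actual EEG features and session data are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-trial EEG features (e.g. band powers);
# purely illustrative, not the paper's remotely collected data.
rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # easy two-class task

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)   # RBF SVM, as in the paper

rf_acc = rf.score(X_te, y_te)             # held-out accuracy of each model
svm_acc = svm.score(X_te, y_te)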

3.
2022 Genetic and Evolutionary Computation Conference, GECCO 2022 ; : 1763-1769, 2022.
Article in English | Scopus | ID: covidwho-2020380

ABSTRACT

Since the first wave of the COVID-19 pandemic, governments have applied restrictions in order to slow its spread. However, creating such policies is hard, especially because a government needs to trade off the spread of the pandemic against the economic losses. For this reason, several works have applied machine learning techniques, often with the help of special-purpose simulators, to generate policies that were more effective than the ones adopted by governments. While the performance of such approaches is promising, they suffer from a fundamental issue: because they are based on black-box machine learning, their real-world applicability is limited, since the resulting policies can be neither analyzed nor tested, and thus are not trustworthy. In this work, we employ a recently developed hybrid approach, which combines reinforcement learning with evolutionary computation, to generate interpretable policies for containing the pandemic. These policies, trained on an existing simulator, aim to reduce the spread of the pandemic while minimizing economic losses. Our results show that our approach is able to find solutions that are extremely simple yet very powerful. In fact, our approach achieves significantly better performance (in simulated scenarios) than both previous work and government policies. © 2022 ACM.
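The core idea of evolving an interpretable containment policy against a simulator can be sketched in miniature: a single human-readable threshold rule ("if the infected fraction exceeds t, apply restrictions") optimized by random search over a toy SIR model. The simulator, cost weights, and rule form below are illustrative assumptions, far simpler than the paper's actual hybrid RL/evolutionary method:

```python
import random

def simulate(threshold, days=120, beta=0.3, gamma=0.1):
    """Toy SIR model; returns epidemic cost + economic cost of restrictions."""
    s, i, r = 0.99, 0.01, 0.0
    total_infected, lockdown_days = i, 0
    for _ in range(days):
        b = beta
        if i > threshold:            # the interpretable rule fires
            b = beta * 0.3           # restrictions cut transmission
            lockdown_days += 1
        new_inf = b * s * i
        s, i, r = s - new_inf, i + new_inf - gamma * i, r + gamma * i
        total_infected += new_inf
    # Trade off epidemic size against the economic cost of restriction days
    return total_infected + 0.002 * lockdown_days

random.seed(1)
best_t, best_cost = 0.5, simulate(0.5)   # start from "never restrict"
for _ in range(50):                       # naive evolutionary-style search
    t = random.uniform(0.0, 0.5)
    c = simulate(t)
    if c < best_cost:
        best_t, best_cost = t, c
```

The resulting policy is a single number a decision-maker can read and audit, which is exactly the property the paper argues black-box policies lack.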

4.
4th International Conference on Bio-Engineering for Smart Technologies, BioSMART 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1730903

ABSTRACT

COVID-19 has caused immense social and economic losses throughout the world. Subjects who have recovered from COVID are known to experience complications. Some studies have shown a change in heart rate variability (HRV) in COVID-recovered subjects compared to healthy ones. This change indicates an increased risk of heart problems among survivors of moderate-to-severe COVID. Hence, this study aims to find HRV features that are altered in COVID-recovered subjects compared to healthy subjects. Data on COVID-recovered and healthy subjects were collected from two hospitals in Delhi, India. Seven ML models were built to classify healthy versus COVID-recovered subjects. The best-performing model was further analyzed to rank the altered heart features in COVID-recovered subjects via AI interpretability. Ranking these features can indicate cardiovascular health status to doctors, who can then support COVID-recovered subjects with timely safeguards against heart disorders. To the best of our knowledge, this is the first study with an in-depth analysis of the heart status of COVID-recovered subjects via ECG analysis. © 2021 IEEE.
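One standard way to obtain the kind of feature ranking this study describes is the built-in importance scores of a tree ensemble. A minimal sketch with synthetic stand-in data follows; the HRV feature names are common in the literature but their use here, and the generated labels, are illustrative assumptions, not the study's hospital data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
feature_names = ["SDNN", "RMSSD", "pNN50", "LF_HF", "HF_power", "mean_HR"]
n = 300
X = rng.normal(size=(n, len(feature_names)))
# Make two features genuinely discriminative between the two groups
y = (0.9 * X[:, 0] - 0.8 * X[:, 3]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Rank features by impurity-based importance, highest first
ranking = sorted(zip(feature_names, clf.feature_importances_),
                 key=lambda p: p[1], reverse=True)
```

The ranked list, rather than the raw classifier output, is what a clinician would inspect: it names which HRV measures drive the healthy-vs-recovered separation.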

5.
17th International Conference on Network and Service Management, CNSM 2021 ; : 8-13, 2021.
Article in English | Scopus | ID: covidwho-1662994

ABSTRACT

The COVID-19 emergency has made the consumption of multimedia content skyrocket in all contexts, including education. Many universities leverage hybrid learning models, in which students join a real-time video session via Wi-Fi from several classrooms to ensure safety and social distancing. This places significant strain on the wireless access network, which is required to deliver an unusually high level of traffic. Artificial Intelligence (AI) and Machine Learning (ML) solutions have emerged as a way to make networks easier to control and manage. However, their black-box nature and, in general, their fire-and-forget approach have generated considerable skepticism across the entire value chain, from vendors to network administrators. This situation has led to new interest in interpretable AI solutions, which aim to make the decisions taken by AI/ML models intelligible to a domain expert. In this article, we review the concept of interpretable AI and analyze the challenges, requirements, and benefits it can bring to delay-sensitive content delivery in 802.11 Wi-Fi networks. Furthermore, we apply these requirements to a use case focused on advanced Quality of Service (QoS) provisioning, and we propose an interpretable, low-complexity ML model that addresses them. The results demonstrate performance gains of up to 60% for the sensitive traffic and up to 20% at the network-wide level. © 2021 IFIP.
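An interpretable, low-complexity model of the kind the article advocates can be illustrated with a shallow decision tree that decides whether a Wi-Fi flow should receive priority QoS treatment, then exports its rules as text a network administrator can audit. The feature names, thresholds, and synthetic data below are illustrative assumptions, not the article's actual model or traces:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
features = ["pkt_size", "inter_arrival_ms", "jitter_ms"]
n = 400
X = rng.uniform(0, 1, size=(n, 3))        # normalized flow statistics
# Toy ground truth: "real-time video" means small packets arriving
# at short intervals, so those flows get the priority label 1
y = ((X[:, 0] < 0.4) & (X[:, 1] < 0.5)).astype(int)

# Depth-2 tree: at most three comparisons per decision, fully readable
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=features)  # human-readable policy
```

Printing `rules` yields an if/else listing over named features, which is the intelligibility property that distinguishes this approach from a black-box classifier of equal accuracy.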
